- Path: mayne.ugrad.cs.ubc.ca!not-for-mail
- From: c2a192@ugrad.cs.ubc.ca (Kazimir Kylheku)
- Newsgroups: comp.lang.c
- Subject: Re: microsecond delay
- Date: 29 Mar 1996 10:33:31 -0800
- Organization: Computer Science, University of B.C., Vancouver, B.C., Canada
- Message-ID: <4jhadrINNsim@mayne.ugrad.cs.ubc.ca>
- References: <14546@828046609> <4jgo7h$n7i@darkstar.UCSC.EDU>
- NNTP-Posting-Host: mayne.ugrad.cs.ubc.ca
-
- In article <4jgo7h$n7i@darkstar.UCSC.EDU>, Moth <moth@cse.ucsc.edu> wrote:
- >In article <14546@828046609>, dan j <djacobso@cs.indiana.edu> wrote:
- >>We are writing a program that reads bar codes swiped on a card reader.
- >>
- >>The problem is that when we write to the device on the serial line,
- >>the following read is too fast for the device. We have to delay the
- >>read briefly. We can use "sleep" to delay a second, but that's too
- >>long.
- >>
- >>Is there any way to effect delays in milliseconds without resorting
- >>to loops?
- >
- >void usleep(unsigned long usec);
-
- Unfortunately, this is defined only by the BSD 4.3+ ``standard'', not POSIX and
- certainly not ANSI C.
-
- If you want a POSIX.1-compliant fine-grained sleep, you have to implement it in
- terms of select():
-
- #include <sys/time.h>   /* select(), struct timeval */
- #include <stddef.h>     /* NULL */
- 
- int microsleep(int microseconds)
- {
-     struct timeval tm;
- 
-     /* Split the interval into whole seconds and leftover microseconds. */
-     tm.tv_sec = microseconds / 1000000;
-     tm.tv_usec = microseconds % 1000000;
- 
-     /* With no fd sets to watch, select() simply blocks until the
-        timeout expires. */
-     return select(0, NULL, NULL, NULL, &tm);
- }
-
- This will wait for _at_least_ an interval equal to microseconds.
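- One caveat: select() can also return early, with -1 and errno set to EINTR,
- if a signal is delivered to the process. A possible retry wrapper (the name
- microsleep_retry and the use of gettimeofday() to recompute the remaining
- time are my own sketch, since you cannot portably rely on select() updating
- the timeval it is passed):

```c
#include <errno.h>
#include <stddef.h>
#include <sys/time.h>

/* Like microsleep(), but restarts the select() call if a signal
   interrupts it.  gettimeofday() is used to recompute the remaining
   time on each iteration. */
int microsleep_retry(long microseconds)
{
    struct timeval start, now, tm;

    gettimeofday(&start, NULL);

    for (;;) {
        long elapsed, remaining;

        gettimeofday(&now, NULL);
        elapsed = (now.tv_sec - start.tv_sec) * 1000000L
                + (now.tv_usec - start.tv_usec);
        remaining = microseconds - elapsed;
        if (remaining <= 0)
            return 0;                 /* interval has fully elapsed */

        tm.tv_sec = remaining / 1000000L;
        tm.tv_usec = remaining % 1000000L;

        if (select(0, NULL, NULL, NULL, &tm) == 0)
            return 0;                 /* timeout expired normally */
        if (errno != EINTR)
            return -1;                /* genuine error */
        /* else: interrupted by a signal -- loop and retry */
    }
}
```

- (Some systems decrement the timeval on return, others leave it untouched,
- which is why the remaining time is recomputed explicitly.)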
-
- In ANSI C, the only way to get finer delays is to poll the clock() function
- in a loop, which is wasteful of CPU time on multitasking systems.
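- Such a polling loop might look like this (a rough sketch; accuracy is
- limited by the resolution behind CLOCKS_PER_SEC, and on systems where
- clock() reports consumed CPU time rather than wall-clock time the delay
- stretches whenever the process is preempted):

```c
#include <time.h>

/* Busy-wait for roughly the given number of microseconds using only
   ANSI C.  The loop burns CPU for the whole interval, which is exactly
   the waste complained about above. */
void ansi_delay(long microseconds)
{
    clock_t start = clock();
    clock_t ticks = (clock_t)((double)microseconds * CLOCKS_PER_SEC
                              / 1000000.0);

    while (clock() - start < ticks)
        ;   /* spin */
}
```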
- --